Results 1 - 20 of 67
1.
Systems ; 11(5), 2023.
Article in English | Web of Science | ID: covidwho-20244892

ABSTRACT

The COVID-19 outbreak devastated business operations and the world economy, especially for small and medium-sized enterprises (SMEs). With limited capital, poorer risk tolerance, and difficulty in withstanding prolonged crises, SMEs are more vulnerable to pandemics and face a higher risk of shutdown. This research sought to establish a model of shutdown risk by investigating two questions: How can SMEs' shutdown risk due to pandemics be measured? How can SMEs reduce shutdown risk? To the best of our knowledge, existing studies have analyzed the impact of the pandemic on SMEs only through statistical surveys and general recommendations; in particular, no case study elaborates on SMEs' shutdown risk. We developed a model to reduce cognitive uncertainty and differences in opinion among experts on COVID-19. The model was built by integrating an improved Dempster's rule of combination, based on weight assignment and matrix analysis, with a Bayesian network. The model was first applied to a representative SME with typical characteristics for survival analysis during the pandemic. The results show that this SME has a 79% probability of low shutdown risk, 15% of medium risk, and 6% of high risk. Solving the capital-chain problem and changing external conditions such as market demand are more difficult for SMEs during a pandemic. Based on counterfactual elaboration of the inferred results, the probability of occurrence of each risk factor was obtained by simulating interventions. The most likely causal-chain analysis based on counterfactual elaboration revealed that solving employee health problems is simpler; for the SME in the study, this approach can reduce the probability of being at high risk of shutdown by 16%. The model's results are consistent with those identified by the SME respondents, which validates the model.
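As an illustration of the evidence-fusion step this abstract describes, the sketch below implements the classical (unimproved) Dempster's rule of combination over three shutdown-risk levels. The expert mass assignments are hypothetical, and the paper's weight-assignment and matrix-analysis refinements are not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (frozenset -> mass)
    with Dempster's rule; conflicting mass is renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two hypothetical expert opinions over shutdown-risk levels.
L, M, H = frozenset({"low"}), frozenset({"med"}), frozenset({"high"})
expert1 = {L: 0.6, M: 0.3, H: 0.1}
expert2 = {L: 0.7, M: 0.2, H: 0.1}
fused = dempster_combine(expert1, expert2)
```

Because both experts lean toward low risk, the fused mass on "low" exceeds either individual assignment, which is the behavior the paper exploits to reduce disagreement among experts.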

2.
CEUR Workshop Proceedings ; 3395:354-360, 2022.
Article in English | Scopus | ID: covidwho-20240635

ABSTRACT

In this paper, the University of Botswana Computer Science (UBCS) team investigates the opinions of Twitter users towards vaccine uptake. In particular, we build three text classifiers to detect people's opinions and classify them as provax (for vaccination), antivax (against vaccination), or neutral (neither for nor against vaccination). Two datasets obtained from Twitter, one by Cotfas and the other by the FIRE 2022 organizing team, were merged and used for this study; the merged dataset contained 4392 tweets. Our first classifier was based on the basic BERT model, and the other two were machine learning models: Random Forest and Multinomial Naive Bayes. The Naive Bayes classifier outperformed the other classifiers with a macro-F1 score of 0.319. © 2022 Copyright for this paper by its authors.
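The macro-F1 score used to rank the classifiers above is the unweighted mean of per-class F1 scores. The sketch below computes it for the three opinion labels, using made-up true and predicted labels rather than the paper's tweet data.

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

labels = ["provax", "antivax", "neutral"]
# Hypothetical gold labels and model predictions.
y_true = ["provax", "provax", "antivax", "neutral", "neutral", "antivax"]
y_pred = ["provax", "antivax", "antivax", "neutral", "provax", "antivax"]
score = macro_f1(y_true, y_pred, labels)
```

Because every class contributes equally regardless of its frequency, macro-F1 penalizes a classifier that ignores a rare opinion class, which is why it is a common choice for imbalanced stance datasets.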

3.
2022 International Conference on Technology Innovations for Healthcare, ICTIH 2022 - Proceedings ; : 34-37, 2022.
Article in English | Scopus | ID: covidwho-20235379

ABSTRACT

Training a Convolutional Neural Network (CNN) is a difficult task, especially for deep architectures that estimate a large number of parameters, and advanced optimization algorithms should be used: optimization is one of the most important steps in reducing the error between the ground truth and the model prediction. Many methods have been proposed to solve this optimization problem. In general, regularization, and more specifically non-smooth regularization, can be used to build sparse networks, which makes the optimization task difficult. Our main aim is to develop a novel optimizer based on a Bayesian framework. Promising results are obtained when our optimizer is applied to the classification of Covid-19 images: the proposed approach reaches an accuracy of 94%, surpassing all competing optimizers, which do not exceed 86% (84% for standard deep learning optimizers). © 2022 IEEE.

4.
Front Bioinform ; 3: 1163430, 2023.
Article in English | MEDLINE | ID: covidwho-20244373

ABSTRACT

Objective: Obesity is a significant risk factor for adverse outcomes following coronavirus infection (COVID-19). However, BMI fails to capture differences in the body fat distribution, the critical driver of metabolic health. Conventional statistical methodologies lack functionality to investigate the causality between fat distribution and disease outcomes. Methods: We applied Bayesian network (BN) modelling to explore the mechanistic link between body fat deposition and hospitalisation risk in 459 participants with COVID-19 (395 non-hospitalised and 64 hospitalised). MRI-derived measures of visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), and liver fat were included. Conditional probability queries were performed to estimate the probability of hospitalisation after fixing the value of specific network variables. Results: The probability of hospitalisation was 18% higher in people living with obesity than those with normal weight, with elevated VAT being the primary determinant of obesity-related risk. Across all BMI categories, elevated VAT and liver fat (>10%) were associated with a 39% mean increase in the probability of hospitalisation. Among those with normal weight, reducing liver fat content from >10% to <5% reduced hospitalisation risk by 29%. Conclusion: Body fat distribution is a critical determinant of COVID-19 hospitalisation risk. BN modelling and probabilistic inferences assist our understanding of the mechanistic associations between imaging-derived phenotypes and COVID-19 hospitalisation risk.
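The conditional probability queries described in this abstract can be illustrated with a toy three-node chain (Obesity -> elevated VAT -> Hospitalisation) evaluated by full enumeration. All conditional probability values below are hypothetical, not the paper's fitted parameters.

```python
from itertools import product

# Hypothetical CPTs for a 3-node chain: Obesity -> VAT_high -> Hospitalised.
p_obese = 0.3
p_vat = {True: 0.8, False: 0.2}    # P(VAT high | obesity status)
p_hosp = {True: 0.35, False: 0.10} # P(hospitalised | VAT status)

def joint(o, v, h):
    """Joint probability of one full assignment under the chain."""
    p = p_obese if o else 1 - p_obese
    p *= p_vat[o] if v else 1 - p_vat[o]
    p *= p_hosp[v] if h else 1 - p_hosp[v]
    return p

def query(h_val, **evidence):
    """P(Hospitalised = h_val | evidence) by exhaustive enumeration."""
    num = den = 0.0
    for o, v, h in product([True, False], repeat=3):
        state = {"o": o, "v": v, "h": h}
        if any(state[k] != val for k, val in evidence.items()):
            continue
        p = joint(o, v, h)
        den += p
        if h == h_val:
            num += p
    return num / den

# Fix the obesity variable and compare hospitalisation probabilities.
risk_obese = query(True, o=True)
risk_lean = query(True, o=False)
```

Fixing a network variable and re-reading the hospitalisation probability, as done here with obesity status, is the same kind of conditional query the study uses to isolate the contribution of VAT and liver fat.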

5.
17th International Conference on Indoor Air Quality and Climate, INDOOR AIR 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2322568

ABSTRACT

In recent work, a Hierarchical Bayesian model was developed to predict occupants' thermal comfort as a function of thermal indoor environmental conditions and indoor CO2 concentrations. The model was trained on two large IEQ field datasets consisting of physical and subjective measurements of IEQ collected from over 900 workstations in 14 buildings across Canada and the US. Posterior results revealed that including measurements of CO2 in thermal comfort modelling credibly increases the prediction accuracy of thermal comfort, in a manner that can support future thermal comfort prediction. In this paper, the predictive model of thermal comfort is integrated into a building energy model (BEM) that simulates an open-concept, mechanically ventilated office space located in Vancouver. The model predicts occupants' thermal satisfaction and heating energy consumption as a function of setpoint thermal conditions and indoor CO2 concentrations such that, for the same thermal comfort level, higher air changes per hour can be achieved by supplying a larger amount of less-conditioned fresh air. The results show that it is possible to reduce the energy demand of increased fresh-air ventilation rates in winter by decreasing indoor air temperature setpoints in a way that does not affect perceived thermal satisfaction. This paper presents a solution for building managers who have been under pressure to increase ventilation rates during the COVID-19 pandemic. © 2022 17th International Conference on Indoor Air Quality and Climate, INDOOR AIR 2022. All rights reserved.

6.
29th Annual IEEE International Conference on High Performance Computing, Data, and Analytics, HiPC 2022 ; : 176-185, 2022.
Article in English | Scopus | ID: covidwho-2322398

ABSTRACT

The COVID-19 pandemic has necessitated disease surveillance using group testing. Novel Bayesian methods using lattice models were proposed, which offer substantial improvements in group-testing efficiency by precisely quantifying uncertainty in diagnoses, acknowledging varying individual risk and dilution effects, and guiding optimally convergent sequential pooled test selections. Computationally, however, Bayesian group testing poses considerable challenges, as computational complexity grows exponentially with sample size. HPC and big data stacks are needed for assessing computational and statistical performance across fluctuating prevalence levels at large scales. Here, we study how to design and optimize critical computational components of Bayesian group testing, including lattice model representation, test selection algorithms, and statistical analysis schemes, in the context of parallel computing. To realize this, we propose a high-performance Bayesian group testing framework named HiBGT, based on Apache Spark, which systematically explores the design space of Bayesian group testing and provides comprehensive heuristics on how to achieve high-performance, highly scalable Bayesian group testing. We show that HiBGT can perform large-scale test selections (> 250 state iterations) and accelerate statistical analyses by up to 15.9x (up to 363x with small trade-offs) through a varied selection of sophisticated parallel computing techniques, while achieving near-linear scalability using up to 924 CPU cores. © 2022 IEEE.
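A minimal sketch of the Bayesian group-testing idea: pooled test results update each individual's posterior infection probability, here by exhaustive enumeration over infection states. The priors, sensitivity, and specificity below are made up, and the exponential cost of this enumeration is precisely the scaling problem HiBGT attacks with parallel computing.

```python
from itertools import product

def posterior_infection(priors, pools, results, sens=0.95, spec=0.99):
    """Posterior marginal infection probabilities after pooled tests,
    by exhaustive enumeration (exponential in n, hence the need for
    HPC frameworks at realistic scales)."""
    n = len(priors)
    marg = [0.0] * n
    z = 0.0
    for status in product([0, 1], repeat=n):
        # Prior probability of this infection pattern.
        p = 1.0
        for s, pr in zip(status, priors):
            p *= pr if s else 1 - pr
        # Likelihood of each pooled test result given the pattern.
        for pool, res in zip(pools, results):
            pos_prob = sens if any(status[i] for i in pool) else 1 - spec
            p *= pos_prob if res else 1 - pos_prob
        z += p
        for i in range(n):
            if status[i]:
                marg[i] += p
    return [m / z for m in marg]

priors = [0.05, 0.05, 0.05, 0.05]
# One positive pool {0,1} and one negative pool {2,3}.
post = posterior_infection(priors, pools=[(0, 1), (2, 3)], results=[True, False])
```

Members of the positive pool see their infection probability rise while members of the negative pool see it fall, which is the uncertainty quantification that guides the next pooled-test selection.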

7.
20th International Learning and Technology Conference, L&T 2023 ; : 120-127, 2023.
Article in English | Scopus | ID: covidwho-2316285

ABSTRACT

Covid-19 has had a destructive influence on global economics, social life, education, and technology, and the rise of the pandemic has increased the use of digital tools for epidemic control. This research uses machine learning (ML) models to identify populated areas and predict the disease's risk and impact. The proposed system requires only details about mask utilization, temperature, and distance between individuals, which helps protect individual privacy. The gathered data is transferred to an ML engine in the cloud to determine the risk probability of public areas with respect to Covid-19. The extracted data are input for multiple ML techniques: Random Forest (RF), Decision Tree (DT), Naive Bayes classifier (NBC), Neural Network (NN), and Support Vector Machine (SVM). Expectation Maximization (EM), K-means, Density, Filtered, and Farthest First (FF) clustering algorithms are applied for clustering; compared to the other algorithms, K-means produces superior accuracy. A regression technique is utilized for prediction. The outcomes of the methods are compared, and the most suitable ML algorithms are used to identify high-risk locations. In comparison to similar architectures, the suggested architecture achieves excellent accuracy. The time taken to build the model using locally weighted learning (LWL) was 0.02 seconds, while the NN took the most time to build (0.90 seconds); to test the model, the LWL algorithm took the most time (1.73 seconds) and the NN the least (0.02 seconds). On the same dataset, the NBC has 99.38 percent accuracy, the RF classifier 97.33 percent, and the DT 94.51 percent. These algorithms have significant potential for predicting the likelihood of crowd risks of Covid-19 in a public space. This approach generates automatic notifications to the relevant government authorities upon any aberrant detection. This study is likely to aid researchers in modeling healthcare systems and spur additional research into innovative technology. © 2023 IEEE.

8.
International Journal of Approximate Reasoning ; : 108929, 2023.
Article in English | ScienceDirect | ID: covidwho-2307413

ABSTRACT

Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. In the literature, this influence is often measured by computing the partial derivative with respect to the network parameters. However, this can become computationally expensive in large networks with thousands of parameters. We propose an algorithm combining automatic differentiation and exact inference to calculate the sensitivity measures in a single pass efficiently. It first marginalizes the whole network once, using e.g. variable elimination, and then backpropagates this operation to obtain the gradient with respect to all input parameters. Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions. Simulation studies highlight the efficiency of our algorithm by scaling it to massive networks with up to 100,000 parameters and investigate the feasibility of generic multi-way analyses. Our routines are also showcased over two medium-sized Bayesian networks: the first modeling the country risks of a humanitarian crisis, the second studying the relationship between the use of technology and the psychological effects of forced social isolation during the COVID-19 pandemic. An implementation of the methods using the popular machine learning library PyTorch is freely available.
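The marginalize-once-then-backpropagate idea in this abstract can be sketched with a minimal hand-rolled reverse-mode autodiff (the paper itself uses PyTorch). The two-node network and its parameter values below are hypothetical.

```python
class Var:
    """Minimal reverse-mode autodiff scalar (supports +, *, and 1 - x)."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __rsub__(self, other):  # supports expressions like 1 - x
        return Var(other - self.value, [(self, -1.0)])

    def backward(self, seed=1.0):
        # Accumulate gradients by walking the computation graph.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# Tiny BN A -> B with (hypothetical) parameters:
# P(A=1) = a, P(B=1 | A=1) = b1, P(B=1 | A=0) = b0.
a, b1, b0 = Var(0.3), Var(0.9), Var(0.2)

# Marginalize once: P(B=1) = a*b1 + (1-a)*b0 ...
p_b = a * b1 + (1 - a) * b0
# ... then one backward pass yields sensitivities w.r.t. ALL parameters.
p_b.backward()
```

After the single backward pass, `a.grad`, `b1.grad`, and `b0.grad` hold the partial derivatives of the marginal with respect to every network parameter at once, which is the efficiency gain over computing each partial derivative separately.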

9.
12th International Conference on Software Technology and Engineering, ICSTE 2022 ; : 113-118, 2022.
Article in English | Scopus | ID: covidwho-2293502

ABSTRACT

Due to the rise of the severe acute infection COVID-19, contact tracing has become a critical subject in medical science. A system for automatically detecting diseases aids medical professionals in disease diagnosis and lessens the patient death rate. To automatically diagnose COVID-19 from contact-tracing data, this research offers a technique based on integrating a Bayesian network and K-Anonymity. In this system, data classification is done using the Bayesian network model, while the K-Anonymity algorithm is utilized to prevent malicious users from accessing patients' personal information. The dataset for this system consisted of 114 patients. The K-Anonymity model removes personal information: age and occupation were replaced with broader categories, such as age ranges and an employed/unemployed status. The accuracy score for the Bayesian network with K-Anonymity is 97.058%, while the Bayesian network without K-Anonymity scores 97.1429%; the minimal difference indicates that both are excellent, accurate models, so the privacy protection comes at almost no cost in accuracy. The system produced the desired results on the currently available dataset. In the future, the researchers can experiment with algorithms other than the Bayesian one, observe how they perform on the dataset, and test with undersampled data to evaluate performance. In addition, researchers should gather more information from various sources to improve the sample-size distribution and make the model sufficiently fair to generate accurate predictions. © 2022 IEEE.
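The generalization step described above (exact ages to age ranges, occupations to an employed/unemployed status) can be sketched as follows. The generalization hierarchy and the sample records are hypothetical, not the study's 114-patient dataset.

```python
from collections import Counter

def generalize(record):
    """Generalize quasi-identifiers: exact age -> decade band,
    occupation -> employed/unemployed (hypothetical hierarchy)."""
    age, job = record["age"], record["occupation"]
    band = f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"
    status = "unemployed" if job in {"none", "student", "retired"} else "employed"
    return (band, status)

def is_k_anonymous(records, k):
    """True if every generalized quasi-identifier combination
    appears at least k times, so no record is uniquely linkable."""
    counts = Counter(generalize(r) for r in records)
    return all(c >= k for c in counts.values())

patients = [
    {"age": 34, "occupation": "nurse"},
    {"age": 37, "occupation": "teacher"},
    {"age": 31, "occupation": "none"},
    {"age": 38, "occupation": "student"},
]
```

With these four records, every generalized combination occurs twice, so the table is 2-anonymous but not 3-anonymous; the classifier then trains on the generalized columns instead of the raw identifiers.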

10.
IEEE Transactions on Engineering Management ; : 1-14, 2023.
Article in English | Scopus | ID: covidwho-2292273

ABSTRACT

In a closed-loop supply chain (CLSC), acquiring end-of-life vehicles (ELVs) and their components from both primary and secondary markets poses huge uncertainty and risk. Moreover, maintaining a constant supply of ELV components while minimizing cost and the exploitation of natural resources is another pressing challenge. To address these issues, the present study develops a risk simulation framework to study market uncertainty and risk in a CLSC. In the first phase of the framework, a total of 12 important variables are identified from existing studies. Total interpretive structural modelling (TISM) is used to develop a causal relationship network among the variables; then, Matrice d'Impacts Croisés Multiplication Appliquée à un Classement (MICMAC) analysis is used to determine the nature of the relationships (i.e., driving or dependence power). In the second phase, the TISM relationships are used to derive a Bayesian belief network model for determining the level of risk (high, medium, or low) associated with the CLSC through the generation of conditional probabilities across (1) multi-parent, (2) single-parent, and (3) parentless nodes. The findings will help decision-makers adopt strategic and operational interventions to increase the effectiveness and resiliency of the network. Furthermore, they will help practitioners make decisions on change-management implementation for stakeholders' performance audits on the attributes of the ELV recovery program and on developing resilience in the CLSC network. Overall, the present study contributes holistically to a broader investigation of the implications of strategic decisions for automobile manufacturers and resellers. IEEE

11.
Energy Economics ; 121, 2023.
Article in English | Scopus | ID: covidwho-2305099

ABSTRACT

We present a weekly structural Vector Autoregressive model of the US crude oil market. Exploiting weekly data, we can explain short-run crude oil price dynamics, including variations related to the COVID-19 pandemic and to Russia's invasion of Ukraine. The model is set-identified with a Bayesian approach that allows restrictions to be imposed directly on structural parameters of interest, such as supply and demand elasticities. Our model incorporates both the futures-spot price spread, to capture shocks to the real price of crude oil driven by changes in expectations, and US inventories, to describe price fluctuations due to unexpected variations in above-ground stocks. Including the futures-spot price spread is key to accounting for feedback effects from the financial to the physical market for crude oil and to identifying a new structural shock that we label an expectational shock. This shock plays a crucial role in describing the series of events that led to the spike in the price of crude oil recorded in the aftermath of Russia's invasion of Ukraine. © 2023 Elsevier B.V.

12.
Lecture Notes on Data Engineering and Communications Technologies ; 165:480-493, 2023.
Article in English | Scopus | ID: covidwho-2304033

ABSTRACT

Sumatra is the third largest island in Indonesia, with the country's second largest population, and comprises the following eight provinces: Aceh, North Sumatra, West Sumatra, Riau, Jambi, South Sumatra, Bengkulu, and Lampung. The economic connectivity of these eight provinces is very strong, which encourages high mobility between them. During the Covid-19 pandemic, this high inter-provincial mobility affected the rate of spread of Covid-19 on the island. The central government ordered local governments to implement a community activity restriction program called PPKM. This article studies the impact of the PPKM program on the spread of Covid-19 on the island of Sumatra, Indonesia. The spread of Covid-19 is modeled using a Susceptible-Infected-Recovered-Death (SIRD) model that accounts for population mobility. The model parameters were estimated using Approximate Bayesian Computation (ABC). The results obtained with this model show that the application of PPKM in several provinces of Sumatra reduced the rate of spread of COVID-19. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
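A minimal discrete-time SIRD sketch in which a mobility factor scales transmission, standing in for PPKM-style activity restrictions. The parameter values are illustrative assumptions, not the ABC-fitted estimates from the study.

```python
def simulate_sird(beta, gamma, mu, n, i0, days, mobility=1.0):
    """Discrete-time SIRD; `mobility` scales the transmission rate
    (a stand-in for PPKM-style activity restrictions)."""
    s, i, r, d = n - i0, float(i0), 0.0, 0.0
    peak = i
    for _ in range(days):
        new_inf = mobility * beta * s * i / n   # S -> I
        new_rec = gamma * i                     # I -> R
        new_dead = mu * i                       # I -> D
        s -= new_inf
        i += new_inf - new_rec - new_dead
        r += new_rec
        d += new_dead
        peak = max(peak, i)
    return {"S": s, "I": i, "R": r, "D": d, "peak_I": peak}

base = simulate_sird(beta=0.4, gamma=0.1, mu=0.01, n=1e6, i0=10, days=200)
ppkm = simulate_sird(beta=0.4, gamma=0.1, mu=0.01, n=1e6, i0=10, days=200,
                     mobility=0.5)
```

Halving mobility lowers both the infection peak and cumulative deaths in this toy run; in the paper, ABC would instead compare such simulated trajectories against observed case counts to estimate the parameters province by province.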

13.
International Journal of Construction Management ; 2023.
Article in English | Scopus | ID: covidwho-2273133

ABSTRACT

Globally, COVID-19 had devastating consequences for the construction sector, but there is little knowledge of how the pandemic impacted developing nations. This study aims to investigate the job stability of employees in the construction sector and to highlight the factors that negatively affect employment. A survey of 1000 questionnaires was distributed to construction-sector personnel, and 436 valid responses were returned. Three popular data mining techniques were utilized: binary logistic regression, support vector machines, and Bayesian networks. Twelve models were developed: one binary logistic model, four support vector machine models, and six Bayesian network models. The results of these models were compared based on accuracy, sensitivity, specificity, precision, recall, F-measure, and ROC area. The Bayesian network method was found to be more effective at modelling employee job stability than the other models. According to our study, craft labourers were most affected in terms of job losses, followed by site engineers, while project managers and contractors were the least affected. The findings highlight the importance of protecting the most vulnerable labourers by revising current legislation; policy initiatives should concentrate on establishing a better and more sustainable labour market. © 2023 Informa UK Limited, trading as Taylor & Francis Group.

14.
Big Data and Cognitive Computing ; 7(1), 2023.
Article in English | Scopus | ID: covidwho-2252136

ABSTRACT

Artificial intelligence (AI) is a branch of computer science that allows machines to work efficiently and analyze complex data. Research focused on AI has increased tremendously, and its role in healthcare service and research is emerging at a rapid pace. This review elaborates on the opportunities and challenges of AI in healthcare and pharmaceutical research. The literature was collected from databases such as PubMed, ScienceDirect, and Google Scholar using keywords and phrases such as 'artificial intelligence', 'pharmaceutical research', 'drug discovery', 'clinical trial', and 'disease diagnosis' to select research and review articles published within the last five years. The application of AI in disease diagnosis, digital therapy, personalized treatment, drug discovery, and the forecasting of epidemics or pandemics was extensively reviewed in this article. Deep learning and neural networks are the most used AI technologies; Bayesian nonparametric models are potential technologies for clinical trial design; and natural language processing and wearable devices are used in patient identification and clinical trial monitoring. Deep learning and neural networks have been applied to predicting outbreaks of seasonal influenza, Zika, Ebola, tuberculosis, and COVID-19. With the advancement of AI technologies, the scientific community may witness rapid and cost-effective healthcare and pharmaceutical research as well as improved service to the general public. © 2023 by the authors.

15.
2022 Winter Simulation Conference, WSC 2022 ; 2022-December:496-507, 2022.
Article in English | Scopus | ID: covidwho-2285192

ABSTRACT

COVID-19-related crimes like counterfeit Personal Protective Equipment (PPE) involve complex supply chains with partly unobservable behavior and sparse data, making it challenging to construct a reliable simulation model. Model calibration can help with this, as it is the process of tuning and estimating the model parameters with observed data of the system. A subset of model calibration techniques seems able to deal with sparse data in other fields: Genetic Algorithms and Bayesian Inference. However, it is unknown how accurately these techniques calibrate simulation models when the data are sparse. This research analyzes the quality-of-fit of these two model calibration techniques for a counterfeit PPE simulation model under an increasing degree of data sparseness. The results demonstrate that these techniques are suitable for calibrating a linear supply chain model with randomly missing values. Further research should focus on other techniques, a larger set of models, and structural uncertainty. © 2022 IEEE.

16.
2022 IEEE International Conference on Big Data, Big Data 2022 ; : 1594-1603, 2022.
Article in English | Scopus | ID: covidwho-2248082

ABSTRACT

Real-time forecasting of non-stationary time series is a challenging problem, especially when the time series evolves rapidly. For such cases, it has been observed that ensemble models consisting of a diverse set of model classes can perform consistently better than individual models. To account for the nonstationarity of the data and the lack of availability of training examples, the models are retrained in real time using the most recent observed data samples. Motivated by the robust performance properties of ensemble models, we developed a Bayesian model averaging ensemble technique consisting of statistical, deep learning, and compartmental models for forecasting epidemiological signals, specifically COVID-19 signals. We observed the epidemic dynamics go through several phases (waves), and in our ensemble model we observed that different model classes performed differently during the various phases. Armed with this understanding, in this paper we propose a modification to the ensemble method that employs this phase information and uses different weighting schemes for each phase to produce improved forecasts. However, predicting the phases of such time series is a significant challenge, especially when behavioral and immunological adaptations govern the evolution of the time series. We explore multiple datasets that can serve as leading indicators of trend changes and employ transfer entropy techniques to capture the relevant indicator. We propose a phase prediction algorithm to estimate the phases using the leading indicators. Using the knowledge of the estimated phase, we selectively sample the training data from similar phases. We evaluate our proposed methodology on our currently deployed COVID-19 forecasting model and the COVID-19 ForecastHub models. The overall performance of the proposed model is consistent across the pandemic.
More importantly, it is ranked second during two critical rapid growth phases in cases, regimes where the performance of most models from the ForecastHub dropped significantly. © 2022 IEEE.
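One simple way to realize the phase-dependent weighting described above is to recompute exponential model weights from each model's recent forecast errors within the current phase. The weighting rule and the error values below are illustrative assumptions, not the paper's exact scheme.

```python
import math

def bma_weights(recent_errors, temperature=1.0):
    """Ensemble weights from recent per-model forecast errors:
    w_m proportional to exp(-err_m / T), so better-performing models
    dominate; recomputing per epidemic phase gives phase-aware weights."""
    scores = [math.exp(-e / temperature) for e in recent_errors]
    total = sum(scores)
    return [s / total for s in scores]

def ensemble_forecast(preds, weights):
    """Weighted average of the member-model forecasts."""
    return sum(p * w for p, w in zip(preds, weights))

# Hypothetical recent errors for [statistical, deep, compartmental]
# models during a rapid-growth phase.
growth_phase_errors = [5.0, 1.0, 2.0]
w = bma_weights(growth_phase_errors)
forecast = ensemble_forecast([120.0, 150.0, 140.0], w)
```

During a different phase the error vector, and hence the weights, would change, which is the mechanism by which the ensemble adapts to waves.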

17.
IEEE Transactions on Parallel and Distributed Systems ; 2023.
Article in English | Scopus | ID: covidwho-2232135

ABSTRACT

Simulation-based Inference (SBI) is a widely used set of algorithms to learn the parameters of complex scientific simulation models. While primarily run on CPUs in high-performance compute clusters, these algorithms have been shown to scale in performance when developed to run on massively parallel architectures such as GPUs. While parallelizing existing SBI algorithms provides performance gains, this might not be the most efficient way to utilize the achieved parallelism. This work proposes a new parallelism-aware adaptation of an existing SBI method, namely Approximate Bayesian Computation with Sequential Monte Carlo (ABC-SMC). This new adaptation is designed to utilize the parallelism not only for performance gains but also for qualitative benefits in the learnt parameters. The key idea is to replace the notion of a single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step sizes sampled from a tuned Beta distribution. This allows the new ABC-SMC algorithm to explore the state space of the parameters being learned more efficiently. We test the effectiveness of the proposed algorithm by learning parameters for an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we obtain results of similar quality in ~100x fewer simulations and observe ~80x lower run-to-run variance across 10 independent trials. IEEE
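The key idea above, replacing a single step-size with per-particle step sizes drawn from a tuned Beta distribution, can be sketched for a one-dimensional parameter as follows. The Gaussian perturbation kernel and the Beta(2, 5) tuning are assumptions made for illustration, not the paper's settings.

```python
import random

def perturb_particles(particles, beta_a=2.0, beta_b=5.0, max_step=1.0, seed=0):
    """ABC-SMC-style perturbation where each particle draws its own
    step size from a tuned Beta(a, b) instead of sharing one fixed
    scalar, mixing small local moves with occasional large jumps."""
    rng = random.Random(seed)
    new = []
    for theta in particles:
        step = max_step * rng.betavariate(beta_a, beta_b)
        new.append(theta + rng.gauss(0.0, step))
    return new

# Perturb a degenerate particle population around 0.5.
particles = [0.5] * 1000
moved = perturb_particles(particles)
```

Because every particle samples its own scale, the population spreads over both fine and coarse length scales in a single SMC iteration, which is the exploration benefit the adaptation aims for.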

18.
21st International Conference on Harmonisation within Atmospheric Dispersion Modelling for Regulatory Purposes, HARMO 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2208004

ABSTRACT

An inversion system that uses a Bayesian approach to combine measurements and ADMS-Urban modelled data by adjusting individual source emissions, subject to estimated uncertainty in the measurements and emissions, has previously been applied to optimising road traffic emissions in Cambridge. In this study, the system has been applied specifically to the impact of interventions, in particular the impact of COVID-19 lockdowns on NOX emissions from road traffic and other sources in London. The ADMS-Urban model was used to calculate a priori hourly NOX concentrations at 195 receptors in London representing 115 reference monitors and 80 Breathe London Network AQMesh sensors. Input data included hourly meteorological measurements from Heathrow Airport, hourly NOX concentrations from 4 rural background monitoring sites, and buildings and road centreline data from Ordnance Survey. A priori emissions were obtained from the London Atmospheric Emissions Inventory (LAEI) for 35 point sources, approximately 70,000 major road sources, and 2,500 1 km grid cells representing minor road, heating, and other sources. The analysis period was 1 January 2020 to 30 April 2021. Estimated uncertainties of 4 and 12 µg/m3 were applied to reference and sensor measurements respectively, while emissions uncertainties of 100%, 50%, and 20% were applied to road traffic, fuels, and other emissions respectively. Road traffic emissions were assumed to have an error covariance of 40% of their emissions uncertainty. Measured NOX concentrations in London reduced significantly during lockdown, with the greatest reduction (around 60%) at kerbside and roadside sites in Central London. However, poor dispersal conditions led to increased concentrations at times when restrictions were tightest.
In contrast, inversion system results demonstrate that NOX emissions from road traffic dropped by around 60% in London compared with pre-lockdown levels and that this reduction occurred when the strictest lockdown measures were in force. The results also show that NOX road traffic emissions were still approximately 30% lower than pre-lockdown levels at the end of April 2021. This analysis demonstrates that lower cost sensors such as AQMesh can provide valuable insight into the effects of policy measures (in this case lockdown restrictions), if their increased uncertainty compared with reference monitors is accounted for. © British Crown Copyright (2022)
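A toy one-dimensional Gaussian analogue of the inversion described above: a single concentration measurement with known uncertainty updates a prior on one emission source through a linear source-receptor sensitivity. The sensitivity value is hypothetical; the prior (100% uncertainty) and the 4 µg/m3 measurement uncertainty echo the settings quoted in the abstract, and the real system adjusts thousands of sources jointly.

```python
def bayesian_update(prior_mean, prior_sd, obs, obs_sd, sensitivity):
    """1-D Gaussian Bayesian inversion for concentration ~ N(s * e, obs_sd^2):
    returns posterior mean/sd of the emission rate e given one measurement."""
    prior_prec = 1.0 / prior_sd ** 2
    like_prec = sensitivity ** 2 / obs_sd ** 2
    post_prec = prior_prec + like_prec
    post_mean = (prior_prec * prior_mean
                 + sensitivity * obs / obs_sd ** 2) / post_prec
    return post_mean, post_prec ** -0.5

# Prior: emission 100 units with 100% uncertainty (road-traffic-like);
# one measurement of 20 ug/m3 (sd 4) with a hypothetical sensitivity
# of 0.5 ug/m3 per emission unit.
post_mean, post_sd = bayesian_update(prior_mean=100.0, prior_sd=100.0,
                                     obs=20.0, obs_sd=4.0, sensitivity=0.5)
```

With a weak prior and a precise measurement, the posterior mean moves close to the measurement-implied emission (40 units) and the posterior uncertainty shrinks well below the prior's, mirroring how the London system infers a ~60% emissions drop from observed concentrations.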

19.
16th International Conference on Probabilistic Safety Assessment and Management, PSAM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2207865

ABSTRACT

The spread of the COVID-19 pandemic across the world has presented a unique problem to researchers and policymakers alike. In addition to uncertainty around the nature of the virus itself, the impact of rapidly changing policy decisions on the spread of the virus has been difficult to predict. Using an epidemiological Susceptible-Infected-Recovered-Dead (SIRD) model as a basis, this paper presents a methodology for modeling the many uncertain factors impacting disease spread, ultimately to understand how a policy decision may impact the community long term. The COVID-19 Decision Support (CoviDeS) tool utilizes an agent-based time simulation model that uses Bayesian networks to determine the state changes of each individual. The model has a level of interpretability more extensive than many existing models, allowing insights to be drawn regarding the relationships between various inputs and the transmission of the disease. Test cases are presented for different scenarios that demonstrate relative changes in transmission resulting from different policy decisions. Further, we demonstrate the model's ability to support decisions for a smaller sub-community contained in a larger population center (e.g. a university within a city). Results of simulations for the city of Los Angeles are presented to demonstrate the use of the model for parametric analysis that could give insight into other real-world scenarios of interest. Though improvements can be made in the model's accuracy relative to real case data, the methods presented offer value for future use, either as a predictive tool or as a decision-making tool for COVID-19 or future pandemic scenarios. © 2022 Probabilistic Safety Assessment and Management, PSAM 2022. All rights reserved.

20.
2022 IEEE Region 10 International Conference, TENCON 2022 ; 2022-November, 2022.
Article in English | Scopus | ID: covidwho-2192088

ABSTRACT

GDP, or Gross Domestic Product, is a key indicator of economic status, providing an omni-comprehensive measure of the wealth of a country or state. With the sudden proliferation of the novel coronavirus disease (COVID-19), there has been increasing interest in forecasting GDP, since it may be severely impacted by the various pandemic control measures imposed in recent times. An accurate forecast of GDP can greatly help in putting forth the right administrative measures while ensuring minimum disruption to the economy. Though recent research focuses on various machine learning-based data-driven models for this purpose, these primarily analyze the change in observed GDP data without explicitly modeling the pandemic's impact. We address this issue by proposing a novel approach that incorporates epidemiological insights into Bayesian network-based predictive analytics to account for the influence of COVID-19 developments on GDP. Rigorous experimentation on state-level and country-level datasets for India demonstrates that a judicious combination of theoretical and data-driven models can substantially improve GDP forecast performance. Our model produces an average prediction error of 0.002% and outperforms several state-of-the-art techniques by a large margin. © 2022 IEEE.
